feat: Add Initial Test Cases and Increase Coverage #32
Conversation
@dyga01, I tested the provided commands, poetry run task test and poetry run task coverage, on a Mac and both produced the intended output!
poetry run task test
============================================================================ test session starts =============================================================================
platform darwin -- Python 3.12.6, pytest-8.3.3, pluggy-1.5.0 -- /Users/hemanialaparthi/Library/Caches/pypoetry/virtualenvs/execexam-nU8we_eK-py3.12/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.12.6', 'Platform': 'macOS-14.6.1-arm64-arm-64bit', 'Packages': {'pytest': '8.3.3', 'pluggy': '1.5.0'}, 'Plugins': {'json-report': '1.5.0', 'metadata': '3.1.1', 'randomly': '3.15.0', 'anyio': '4.6.0', 'cov': '4.1.0', 'clarity': '1.0.1', 'hypothesis': '6.112.1'}}
Using --randomly-seed=142332441
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/Users/hemanialaparthi/Desktop/cmpsc203/enchancement/execexam/.hypothesis/examples'))
rootdir: /Users/hemanialaparthi/Desktop/cmpsc203/enchancement/execexam
configfile: pytest.ini
plugins: json-report-1.5.0, metadata-3.1.1, randomly-3.15.0, anyio-4.6.0, cov-4.1.0, clarity-1.0.1, hypothesis-6.112.1
collected 46 items
tests/test_debug.py::test_get_debugging_messages PASSED
tests/test_debug.py::test_add_message PASSED
tests/test_debug.py::test_clear_messages PASSED
tests/test_debug.py::test_has_debugging_messages PASSED
tests/test_debug.py::test_enum_values PASSED
tests/test_debug.py::test_messages_list_initially_empty PASSED
tests/test_debug.py::test_debug_function PASSED
tests/test_convert.py::test_path_to_string PASSED
tests/test_util.py::test_determine_execexam_return_code_no_tests_collected PASSED
tests/test_util.py::test_determine_execexam_return_code_usage_error PASSED
tests/test_util.py::test_determine_execexam_return_code_other PASSED
tests/test_util.py::test_determine_execexam_return_code_tests_failed PASSED
tests/test_util.py::test_determine_execexam_return_code_internal_error PASSED
tests/test_util.py::test_determine_execexam_return_code_interrupted PASSED
tests/test_enumerations.py::test_report_type_enum_invalid_name PASSED
tests/test_enumerations.py::test_theme_enum_invalid_name PASSED
tests/test_enumerations.py::test_theme_enum_access_by_name PASSED
tests/test_enumerations.py::test_advice_method_enum_access_by_name PASSED
tests/test_enumerations.py::test_report_type_enum_access_by_name PASSED
tests/test_enumerations.py::test_report_type_enum_values PASSED
tests/test_enumerations.py::test_advice_method_enum_invalid_name PASSED
tests/test_enumerations.py::test_theme_enum_values PASSED
tests/test_enumerations.py::test_advice_method_enum_values PASSED
tests/test_display.py::test_display_advice PASSED
tests/test_display.py::test_make_colon_separated_string PASSED
tests/test_display.py::test_get_display_return_code PASSED
tests/test_pytest_plugin.py::test_pytest_runtest_logreport PASSED
tests/test_pytest_plugin.py::test_pytest_assertion_pass_no_assertions PASSED
tests/test_pytest_plugin.py::test_pytest_assertion_pass_no_report PASSED
tests/test_pytest_plugin.py::test_pytest_assertion_pass PASSED
tests/test_pytest_plugin.py::test_pytest_runtest_logreport_not_call PASSED
tests/test_extract.py::test_no_labels PASSED
tests/test_extract.py::test_extract_details PASSED
tests/test_extract.py::test_is_failing_test_details_empty_with_non_empty_string PASSED
tests/test_extract.py::test_multiple_labels PASSED
tests/test_extract.py::test_extract_details_hypothesis PASSED
tests/test_extract.py::test_extract_test_output_without_label PASSED
tests/test_extract.py::test_single_label PASSED
tests/test_extract.py::test_extract_failing_test_details PASSED
tests/test_extract.py::test_extract_test_assertions_details PASSED
tests/test_extract.py::test_extract_test_assertion_details_list PASSED
tests/test_extract.py::test_extract_test_run_details PASSED
tests/test_extract.py::test_extract_test_output_with_label PASSED
tests/test_extract.py::test_is_failing_test_details_empty_with_empty_string PASSED
tests/test_extract.py::test_is_failing_test_details_empty_with_newline PASSED
tests/test_extract.py::test_extract_test_assertion_details PASSED
============================================================================= 46 passed in 0.17s =============================================================================
poetry run task coverage
============================================================================ test session starts =============================================================================
platform darwin -- Python 3.12.6, pytest-8.3.3, pluggy-1.5.0
Using --randomly-seed=1175480625
rootdir: /Users/hemanialaparthi/Desktop/cmpsc203/enchancement/execexam
configfile: pytest.ini
plugins: json-report-1.5.0, metadata-3.1.1, randomly-3.15.0, anyio-4.6.0, cov-4.1.0, clarity-1.0.1, hypothesis-6.112.1
collected 46 items
tests/test_pytest_plugin.py .....
tests/test_util.py ......
tests/test_enumerations.py .........
tests/test_extract.py ...............
tests/test_display.py ...
tests/test_debug.py .......
tests/test_convert.py .
---------- coverage: platform darwin, python 3.12.6-final-0 ----------
Name Stmts Miss Branch BrPart Cover Missing
-------------------------------------------------------------------------
execexam/__init__.py 0 0 0 0 100%
execexam/convert.py 7 0 2 0 100%
execexam/debug.py 22 0 4 0 100%
execexam/enumerations.py 17 0 0 0 100%
execexam/extract.py 80 0 29 1 99% 89->80
execexam/util.py 14 0 10 0 100%
tests/__init__.py 0 0 0 0 100%
tests/test_advise.py 0 0 0 0 100%
tests/test_convert.py 12 0 0 0 100%
tests/test_debug.py 32 0 2 0 100%
tests/test_display.py 46 0 8 0 100%
tests/test_enumerations.py 41 0 6 0 100%
tests/test_extract.py 85 0 12 0 100%
tests/test_main.py 0 0 0 0 100%
tests/test_pytest_plugin.py 67 0 8 1 99% 18->20
tests/test_util.py 14 0 0 0 100%
-------------------------------------------------------------------------
TOTAL 437 0 81 2 99%
Coverage JSON written to file coverage.json
Required test coverage of 50% reached. Total coverage: 99.61%
============================================================================= 46 passed in 0.32s =============================================================================
This PR needs some changes before approval. To start:
- The coverage outputs are displayed incorrectly, which leads me to believe that these files are not being tested correctly by pytest. Here is what our poetry run task coverage output looks like on execexam:
- Here is an example from a previous project, Chasten, of what it should look like:
- Another issue is that the test_coverage.py file contains the same functions that it is testing. This file should import from the coverage.py file instead (see the sketch after this list).
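For example, the test module should import the code under test rather than redefining it. A minimal sketch follows; calculate_coverage is a hypothetical function name and execexam.coverage is an assumed module path, used only to show the import pattern rather than the real coverage.py API.

# tests/test_coverage.py -- sketch of the suggested structure.
# `calculate_coverage` and the `execexam.coverage` module path are
# hypothetical; the real coverage.py may expose something different.
from execexam.coverage import calculate_coverage


def test_calculate_coverage_full():
    # The test exercises the imported function instead of a local copy
    # of the same code, so coverage is attributed to coverage.py itself.
    assert calculate_coverage(covered=10, total=10) == 100.0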
How does this PR connect to PR #38?
@dyga01 please note that there are conflicts in this branch.
@dyga01 can this PR be closed?
Pull Request: Add additional test files and functions for execexam so that we have greater code coverage.
Aidan Dyga (@dyga01), Hannah Brown (@hannahb09), Coltin Colucci (@Coltin2121)
Related issue: Additional and Improved Tests and Fuzzing #4
Labels: test, enhancement
This pull request adds a large number of test files and functions. Most of the tests were written with Pytest, with some Hypothesis test cases as well. Overall, the main goal of this pull request is to add initial test cases for many features of the tool.
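To illustrate the mix of styles, here is a minimal, self-contained sketch of a plain pytest case next to a Hypothesis property test. The path_to_string helper below is a stand-in for illustration only, not execexam's actual convert code.

# Plain pytest case and Hypothesis property test on a stand-in helper.
from hypothesis import given
from hypothesis import strategies as st


def path_to_string(path_parts):
    # Stand-in helper: join path components with "/".
    return "/".join(path_parts)


def test_path_to_string_example():
    # Plain pytest case with one fixed input.
    assert path_to_string(["tests", "test_convert.py"]) == "tests/test_convert.py"


@given(st.lists(st.text(alphabet="abc", min_size=1), min_size=1))
def test_path_to_string_roundtrip(parts):
    # Hypothesis generates many lists of simple strings; splitting the
    # joined result should give back the original parts.
    assert path_to_string(parts).split("/") == parts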
Coverage of the test cases in our branch is currently at 99%. However, this number is expected to drop as our tests are integrated with the rest of the updated tool. Once most of the feature PRs have been merged, we will work to bring the coverage back up.
The tests have been run on a Mac. It would be great if a Linux user and a Windows user could try them out as well.
Here is the output of running the following commands on a Mac:
a. poetry run task test
b. poetry run task coverage